100 research outputs found

    Convergence of a Second Order Markov Chain

    In this paper, we consider convergence properties of a second order Markov chain. Just as a column stochastic matrix is associated with a Markov chain, a so-called {\em transition probability tensor} $P$ of order 3 and dimension $n$ is associated with a second order Markov chain with $n$ states. For this $P$, define the map $F_P$ by $F_P(x) := Px^{2}$ on the $(n-1)$-dimensional standard simplex $\Delta_n$. If 1 is not an eigenvalue of $\nabla F_P$ on $\Delta_n$ and $P$ is irreducible, then there exists a unique fixed point of $F_P$ on $\Delta_n$. In particular, if every entry of $P$ is greater than $\frac{1}{2n}$, then 1 is not an eigenvalue of $\nabla F_P$ on $\Delta_n$. Under the latter condition, we further show that the second order power method for finding the unique fixed point of $F_P$ on $\Delta_n$ is globally linearly convergent and that the corresponding second order Markov process is globally $R$-linearly convergent. Comment: 16 pages, 3 figures
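    The iteration behind the second order power method can be sketched in a few lines of plain Python: repeatedly apply $x \mapsto Px^{2}$, where $(Px^{2})_i = \sum_{j,k} P_{ijk} x_j x_k$. The tensor `P` below is a small illustrative example of our own (not from the paper); the requirements are that each slice is stochastic, $\sum_i P_{ijk} = 1$, and, for the abstract's convergence guarantee, that every entry exceeds $1/(2n)$.

    ```python
    def apply_tensor(P, x):
        """Compute y = P x^2 with y_i = sum_{j,k} P[i][j][k] * x[j] * x[k]."""
        n = len(x)
        return [sum(P[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
                for i in range(n)]

    def second_order_power_method(P, x0, tol=1e-12, max_iter=1000):
        """Iterate x <- P x^2 on the simplex until the update is below tol."""
        x = x0[:]
        for _ in range(max_iter):
            y = apply_tensor(P, x)
            if max(abs(a - b) for a, b in zip(x, y)) < tol:
                return y
            x = y
        return x

    # Illustrative 2-state tensor: sum_i P[i][j][k] = 1 for every (j, k), and
    # every entry exceeds 1/(2n) = 1/4, so the fixed point is unique and the
    # iteration converges by the condition stated in the abstract.
    P = [[[0.6, 0.5], [0.5, 0.3]],
         [[0.4, 0.5], [0.5, 0.7]]]
    x = second_order_power_method(P, [0.5, 0.5])
    ```

    Since each slice of `P` is stochastic, the iterate stays on the simplex: $\sum_i (Px^{2})_i = (\sum_j x_j)^2 = 1$.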

    The E-Eigenvectors of Tensors

    We first show that the eigenvector of a tensor is well-defined. The eigenvectors of a tensor that are not among its E-eigenvectors are precisely the eigenvectors lying on the nonsingular projective variety $\mathbb{S}=\{\mathbf{x}\in\mathbb{P}^n \mid \sum_{i=0}^{n} x_i^2=0\}$. We show that a generic tensor has no eigenvectors on $\mathbb{S}$. In fact, we show that a generic tensor has no eigenvectors on any proper nonsingular projective variety in $\mathbb{P}^n$. From these facts, we show that the coefficients of the E-characteristic polynomial are algebraically dependent: a certain power of the determinant of the tensor can be expressed through the coefficients other than the constant term. Hence, a nonsingular tensor always has an E-eigenvector. When a tensor $\mathcal{T}$ is nonsingular and symmetric, its E-eigenvectors are exactly the singular points of a class of hypersurfaces defined by $\mathcal{T}$ and a parameter. We give an explicit factorization of the discriminant of this class of hypersurfaces, which completes Cartwright and Sturmfels' formula. We show that the factorization contains the determinant and the E-characteristic polynomial of the tensor $\mathcal{T}$ as irreducible factors. Comment: 17 pages
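    For concreteness: an E-eigenpair $(\lambda, x)$ of an order-3 tensor $\mathcal{T}$ satisfies $\mathcal{T}x^{2} = \lambda x$ together with the normalization $x^{\mathsf T}x = 1$ (over the reals such pairs are also called Z-eigenpairs). Below is a minimal numeric checker; the diagonal tensor `T` is an illustrative choice of our own, for which each unit vector is an E-eigenvector with eigenvalue equal to the corresponding diagonal entry.

    ```python
    def is_e_eigenpair(T, x, lam, tol=1e-9):
        """Check that (lam, x) satisfies T x^2 = lam * x with x.x = 1,
        where (T x^2)_i = sum_{j,k} T[i][j][k] * x[j] * x[k]."""
        n = len(x)
        if abs(sum(v * v for v in x) - 1.0) > tol:  # x must be normalized
            return False
        Tx2 = [sum(T[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
               for i in range(n)]
        return max(abs(Tx2[i] - lam * x[i]) for i in range(n)) < tol

    # Illustrative diagonal tensor with T[0][0][0] = 2 and T[1][1][1] = 3;
    # the unit vectors are E-eigenvectors with eigenvalues 2 and 3.
    T = [[[2.0, 0.0], [0.0, 0.0]],
         [[0.0, 0.0], [0.0, 3.0]]]
    ```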

    The Largest Laplacian and Signless Laplacian H-Eigenvalues of a Uniform Hypergraph

    In this paper, we show that the largest Laplacian H-eigenvalue of a $k$-uniform nontrivial hypergraph is strictly larger than the maximum degree when $k$ is even, and we give a tight lower bound for this eigenvalue. For a connected even-uniform hypergraph, this lower bound is achieved if and only if the hypergraph is a hyperstar. When $k$ is odd, however, it can happen that the largest Laplacian H-eigenvalue equals the maximum degree, which is a tight lower bound. On the other hand, tight upper and lower bounds for the largest signless Laplacian H-eigenvalue of a connected $k$-uniform hypergraph are given: the upper (respectively, lower) bound is achieved if and only if the hypergraph is a complete hypergraph (respectively, a hyperstar). The largest Laplacian H-eigenvalue is always less than or equal to the largest signless Laplacian H-eigenvalue; when the hypergraph is connected, equality holds if and only if $k$ is even and the hypergraph is odd-bipartite. Comment: 26 pages, 3 figures
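    To make the objects concrete, here is a sketch that builds the Laplacian tensor $\mathcal{L} = \mathcal{D} - \mathcal{A}$ of a 3-uniform hypergraph under the standard definition (an assumption on our part; the paper's exact normalization should be checked against the original): the adjacency tensor has entry $1/(k-1)!$ for every permutation of each edge, and the diagonal entry $\mathcal{L}_{vv\cdots v}$ is the degree of vertex $v$. The example hypergraph is a small hyperstar of our own choosing.

    ```python
    import math
    from itertools import permutations

    def laplacian_tensor_3(n, edges):
        """Build L = D - A for a 3-uniform hypergraph on vertices 0..n-1.
        A[i][j][k] = 1/(3-1)! for every permutation (i, j, k) of an edge,
        and L[v][v][v] = degree of v (number of edges containing v)."""
        L = [[[0.0] * n for _ in range(n)] for _ in range(n)]
        deg = [0] * n
        for e in edges:
            for v in e:
                deg[v] += 1
            for i, j, k in permutations(e):
                L[i][j][k] -= 1.0 / math.factorial(2)  # adjacency contribution
        for v in range(n):
            L[v][v][v] += deg[v]                       # degree on the diagonal
        return L, deg

    # Hyperstar with centre 0 and two edges (an illustrative choice):
    # the maximum degree 2 appears as the diagonal entry L[0][0][0].
    L, deg = laplacian_tensor_3(5, [(0, 1, 2), (0, 3, 4)])
    ```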

    A Tensor Analogy of Yuan's Theorem of the Alternative and Polynomial Optimization with Sign Structure

    Yuan's theorem of the alternative is an important theoretical tool in optimization: it provides a checkable certificate for the infeasibility of a strict inequality system involving two homogeneous quadratic functions. In this paper, we provide a tractable extension of Yuan's theorem of the alternative to the symmetric tensor setting. As an application, we establish that the optimal value of a class of nonconvex polynomial optimization problems with suitable sign structure (more explicitly, with essentially non-positive coefficients) can be computed by a related convex conic programming problem, and that the optimal solution of these nonconvex problems can be recovered from the corresponding solution of the convex conic program. Moreover, we show that this class of nonconvex polynomial optimization problems enjoys an exact sum-of-squares relaxation and so can be solved via a single semidefinite programming problem. Comment: accepted by the Journal of Optimization Theory and Applications; UNSW preprint; 22 pages